Auto-Generated Summary
General Aspects of AI/ML Framework
- Model Identification and Life Cycle Management (LCM):
A model ID can be used for LCM operations, with additional conditions
categorized as NW-side or UE-side.
Consistency between training and inference is ensured through model
identification, model transfer, and performance monitoring.
- Data Collection Requirements: Data size and latency
requirements are defined for training, inference, and monitoring across
use cases like CSI compression, beam management, and positioning.
- Complexity Metrics: Computational complexity
metrics such as FLOPs, model size, and number of parameters are adopted
for evaluation.
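As a rough illustration of how the adopted metrics relate, they can be computed for a toy fully connected model (the layer sizes below are hypothetical, not taken from the TR):

```python
# Illustrative sketch: counting the adopted complexity metrics
# (number of parameters, model size, FLOPs) for a small dense network.

def dense_params(n_in, n_out):
    """Weights plus biases of one dense layer."""
    return n_in * n_out + n_out

def dense_flops(n_in, n_out):
    """One multiply-accumulate counted as 2 FLOPs."""
    return 2 * n_in * n_out

# Hypothetical (input, output) sizes per layer, e.g. a CSI encoder stub.
layers = [(256, 128), (128, 64), (64, 32)]

params = sum(dense_params(i, o) for i, o in layers)
flops = sum(dense_flops(i, o) for i, o in layers)
size_bytes = params * 4  # assuming 32-bit float weights

print(f"parameters: {params}")
print(f"FLOPs per inference: {flops}")
print(f"model size: {size_bytes / 1024:.1f} KiB")
```

The same three numbers are what the evaluations report per submitted model, independent of the training framework.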
- Generalization and Scalability: Generalization
performance is evaluated across deployment scenarios, configurations, and
UE parameters.
Fine-tuning/re-training improves performance for new scenarios but may
degrade performance for previous ones.
CSI Feedback Enhancement
- Inference Procedure: Examples of inference
procedures for CSI compression and prediction are provided, including
pre-processing and post-processing steps.
- Training Collaboration Types: Pros and cons of
training collaboration types (Type 1, Type 2, Type 3) are summarized,
including flexibility, extendibility, and compatibility.
- Monitoring Mechanisms: Intermediate KPI monitoring
mechanisms are defined, including options for calculating the KPI
difference and the associated monitoring accuracy.
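One way to picture intermediate-KPI monitoring for CSI compression is to compute an intermediate KPI such as SGCS between the target and reconstructed precoding vectors, and flag the model when the deviation from the ideal value exceeds a threshold. The dimensions, noise level, and threshold below are illustrative assumptions:

```python
import numpy as np

def sgcs(v_target, v_out):
    """Squared generalized cosine similarity between two precoding vectors.
    np.vdot conjugates its first argument, giving the complex inner product."""
    num = np.abs(np.vdot(v_target, v_out)) ** 2
    den = (np.linalg.norm(v_target) ** 2) * (np.linalg.norm(v_out) ** 2)
    return num / den

rng = np.random.default_rng(0)
# Hypothetical 32-element target eigenvector and a noisy reconstruction.
target = rng.standard_normal(32) + 1j * rng.standard_normal(32)
reconstructed = target + 0.1 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))

kpi_actual = sgcs(target, reconstructed)
kpi_diff = 1.0 - kpi_actual     # deviation from the ideal KPI of 1
threshold = 0.1                 # illustrative monitoring threshold
print(f"SGCS = {kpi_actual:.3f}, flag model: {kpi_diff > threshold}")
```

By Cauchy-Schwarz the SGCS always lies in [0, 1], with 1 meaning perfect reconstruction, which is what makes it usable as a thresholded monitoring statistic.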
- Baseline Assumptions: Baseline simulation
assumptions for CSI feedback enhancement evaluations are outlined,
including duplex mode, carrier frequency, bandwidth, and channel
estimation.
Beam Management
- Representative Sub-Use Cases: Spatial-domain DL
beam prediction (BM-Case1) and temporal-domain DL beam prediction
(BM-Case2) are studied.
- Performance Results: AI/ML achieves good beam
prediction accuracy with reduced RS/measurement overhead. Measurement errors
and quantization degrade accuracy, but AI/ML still outperforms non-AI baselines.
- Generalization and Realistic Considerations:
Generalization performance is evaluated for unseen scenarios,
configurations, and UE parameters.
Realistic considerations such as measurement errors and quantization are
included in evaluations.
- Monitoring Metrics: Metrics include beam prediction
accuracy, link quality KPIs (e.g., throughput, L1-RSRP), and input/output
data distribution.
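The beam prediction accuracy metric can be sketched as a Top-K check: whether the genie-aided best beam (highest measured L1-RSRP) falls among the model's K best-predicted beams. The beam counts and RSRP values below are purely illustrative:

```python
import numpy as np

def topk_accuracy(pred_rsrp, true_rsrp, k=1):
    """Fraction of samples whose genie-aided best beam (highest measured
    L1-RSRP) is among the model's Top-K predicted beams."""
    best = np.argmax(true_rsrp, axis=1)          # genie-aided best beam index
    topk = np.argsort(pred_rsrp, axis=1)[:, -k:] # model's K strongest predictions
    return float(np.mean([b in row for b, row in zip(best, topk)]))

# Toy example: 3 samples, 4 beams (illustrative normalized RSRP values).
pred = np.array([[0.1, 0.9, 0.3, 0.2],
                 [0.8, 0.1, 0.6, 0.4],
                 [0.2, 0.3, 0.1, 0.7]])
true = np.array([[0.2, 0.8, 0.4, 0.1],
                 [0.3, 0.2, 0.9, 0.5],
                 [0.1, 0.2, 0.3, 0.9]])

print(topk_accuracy(pred, true, k=1))  # Top-1 accuracy
print(topk_accuracy(pred, true, k=2))  # Top-2 accuracy
```

The complementary link-quality view mentioned above (e.g. the L1-RSRP gap between the predicted and the genie-aided best beam) is computed from the same measured RSRP array.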
Positioning Accuracy Enhancement
- Evaluation Assumptions and Methodology: Both direct
AI/ML positioning and AI/ML-assisted positioning are evaluated using
one-sided models.
Model input types include channel impulse response (CIR), power delay
profile (PDP), and delay profile (DP), with varying dimensions.
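The three input types are closely related: PDP is the per-tap power of the CIR, and DP keeps only the tap delays. A minimal sketch, assuming an illustrative 256-tap channel and 8 retained paths:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical complex channel impulse response with 256 delay taps.
cir = rng.standard_normal(256) + 1j * rng.standard_normal(256)

# Power delay profile: power per delay tap.
pdp = np.abs(cir) ** 2

# Delay profile: delays (tap indices) of the strongest taps only.
n_paths = 8
dp = np.sort(np.argsort(pdp)[-n_paths:])

print(pdp.shape, dp.shape)
```

This is why the input types trade off accuracy against reporting dimension: CIR keeps full complex taps, PDP drops phase, and DP drops power as well.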
- Performance Results: AI/ML significantly improves
positioning accuracy compared to conventional RAT-dependent methods,
achieving sub-meter (<1 m) accuracy in indoor factory (InF) scenarios.
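Accuracy figures like this are typically read off the horizontal positioning-error CDF at a given percentile (e.g. 90%). A sketch on synthetic UE positions (the error statistics below are illustrative, not evaluation results):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic ground-truth and estimated 2D UE positions (metres).
true_xy = rng.uniform(0, 100, size=(1000, 2))
est_xy = true_xy + rng.normal(0, 0.4, size=(1000, 2))  # assumed error spread

# Horizontal positioning error per UE, then the CDF @ 90% point.
err = np.linalg.norm(est_xy - true_xy, axis=1)
p90 = np.percentile(err, 90)
print(f"90th-percentile horizontal error: {p90:.2f} m")
```

A "<1 m" claim then corresponds to this percentile value staying below one metre.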
- Fine-Tuning Observations: Fine-tuning improves
performance for new deployment scenarios but may degrade performance for
previous ones.
Dataset size requirements depend on the similarity between scenarios.
- Monitoring Methods: Label-based and label-free
monitoring methods are feasible for model performance evaluation.
Remaining Aspects
- Finalization of Text Proposals: Text proposals for
TR 38.843 include detailed descriptions of evaluation assumptions,
methodology, KPIs, and performance results for all use cases.
- Recommendations for Normative Work: Both BM-Case1
and BM-Case2 are recommended for normative work, along with necessary
signaling/mechanisms for data collection, inference, and monitoring.
- Model Complexity: Complexity metrics for beam
management and CSI feedback enhancement are summarized, including model
parameters, size, and computational complexity.
This summary captures the high-level agreements and
observations across various topics studied in the document.